6 research outputs found

    A Deep Incremental Boltzmann Machine for Modeling Context in Robots

    Get PDF
    Context-awareness is an essential capability for robots that are to be as adaptive as possible in challenging environments. Although there have been many context modeling efforts, they assume a fixed structure and number of contexts. In this paper, we propose an incremental deep model that extends Restricted Boltzmann Machines. Our model receives one scene at a time, and gradually extends the contextual model when necessary, either by adding a new context or a new context layer to form a hierarchy. We show on a scene classification benchmark that our method converges to a good estimate of the contexts of the scenes, and performs better than or on par with other incremental and non-incremental models on several tasks. Comment: 6 pages, 5 figures, International Conference on Robotics and Automation (ICRA 2018)
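The incremental idea in this abstract can be sketched as a toy model: grow the hidden ("context") layer of an RBM-like model whenever a new scene is poorly reconstructed. The class, threshold, and growth rule below are illustrative assumptions, not the paper's actual confidence-based criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

class IncrementalRBM:
    """Toy RBM whose hidden ("context") layer grows when a new
    scene is poorly explained by the current model."""

    def __init__(self, n_visible, n_hidden=1):
        self.W = rng.normal(0, 0.1, (n_visible, n_hidden))

    def _sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def reconstruction_error(self, v):
        h = self._sigmoid(v @ self.W)          # hidden (context) activations
        v_rec = self._sigmoid(h @ self.W.T)    # reconstruction of the scene
        return np.mean((v - v_rec) ** 2)

    def observe(self, v, threshold=0.02):
        # Grow the context layer when the scene is poorly modeled
        # (threshold is a toy stand-in for the paper's confidence test).
        if self.reconstruction_error(v) > threshold:
            new_col = rng.normal(0, 0.1, (self.W.shape[0], 1))
            self.W = np.hstack([self.W, new_col])

rbm = IncrementalRBM(n_visible=8, n_hidden=1)
for _ in range(5):
    rbm.observe(rng.random(8))
print(rbm.W.shape[1])  # number of contexts after five scenes
```

The paper additionally adds whole context *layers* to form a hierarchy; this sketch only grows units within one layer.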

    Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments

    Full text link
    Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (~30% better) and would prefer to use (~32% more often). Comment: International Conference on Intelligent Robots and Systems (IROS 2019), Demo 1: Finding the described object (https://youtu.be/BE6-F6chW0w), Demo 2: Referring to the pointed object (https://youtu.be/nmmv6JUpy8M), Supplementary Video (https://youtu.be/sFjBa_MHS98)
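The two-stage idea can be illustrated with a rule-level stand-in: stage 1 would be the learned spatial-relation estimator, replaced here by a geometric proxy; stage 2 selects a relation that is unambiguous, i.e., true for the target but false for every similar distractor. Object layout, relation names, and the selection loop are illustrative assumptions.

```python
def relations(obj, anchor):
    """Toy stage 1: which spatial relations hold between obj and a landmark
    (a geometric proxy for the paper's learned estimator)."""
    rels = set()
    if obj["x"] < anchor["x"]: rels.add("left of")
    if obj["x"] > anchor["x"]: rels.add("right of")
    if obj["y"] > anchor["y"]: rels.add("behind")
    if obj["y"] < anchor["y"]: rels.add("in front of")
    return rels

def describe(target, distractors, landmarks):
    """Toy stage 2: pick a relation no distractor shares with the landmark."""
    for lm in landmarks:
        for rel in sorted(relations(target, lm)):
            if all(rel not in relations(d, lm) for d in distractors):
                return f"the {target['name']} {rel} the {lm['name']}"
    return f"the {target['name']}"

laptop = {"name": "laptop", "x": 5, "y": 0}
mug_a  = {"name": "mug", "x": 2, "y": 0}   # target
mug_b  = {"name": "mug", "x": 8, "y": 0}   # very similar distractor
print(describe(mug_a, [mug_b], [laptop]))  # "the mug left of the laptop"
```

"Left of" disambiguates here because the distractor mug lies on the other side of the laptop; with two mugs on the same side, this sketch would fall back to an ambiguous expression, which is exactly the case the learned model is meant to handle.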

    CINet: A Learning Based Approach to Incremental Context Modeling in Robots

    Get PDF
    There have been several attempts at modeling context in robots. However, these attempts either assume a fixed number of contexts or use a rule-based approach to determine when to increment the number of contexts. In this paper, we pose the task of when to increment as a learning problem, which we solve using a Recurrent Neural Network. We show that the network successfully (with 98% testing accuracy) learns to predict when to increment, and demonstrate, in a scene modeling problem (where the correct number of contexts is not known), that the robot increments the number of contexts in an expected manner (i.e., the entropy of the system is reduced). We also present how the incremental model can be used for various scene reasoning tasks. Comment: The first two authors have contributed equally, 6 pages, 8 figures, International Conference on Intelligent Robots and Systems (IROS 2018)
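The interface of such a network can be sketched with a minimal hand-rolled recurrent cell: it reads a sequence of per-step model statistics (e.g., fit scores under the current context model) and emits the probability that a new context should be added. The weights below are random stand-ins for the trained CINet, so only the shape of the computation is meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)

class TinyRNN:
    """Minimal recurrent cell with the same input/output contract as a
    learned increment predictor; weights are untrained placeholders."""

    def __init__(self, n_in, n_hidden):
        self.Wx = rng.normal(0, 0.5, (n_in, n_hidden))
        self.Wh = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.wo = rng.normal(0, 0.5, n_hidden)

    def predict_increment(self, seq):
        h = np.zeros(self.Wh.shape[0])
        for x in seq:                      # one statistics vector per step
            h = np.tanh(x @ self.Wx + h @ self.Wh)
        # Sigmoid read-out: P(add a new context)
        return 1.0 / (1.0 + np.exp(-(h @ self.wo)))

net = TinyRNN(n_in=3, n_hidden=8)
p = net.predict_increment(rng.random((10, 3)))  # 10 steps of 3 statistics
print(0.0 < p < 1.0)
```

In the paper this decision is learned from supervised examples rather than wired by hand, which is what lets it replace the rule-based increment criteria of earlier models.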

    Hierarchical incremental context modeling in robots (Robotlarda hiyerarşik arttırımlı bağlam modellenmesi).

    No full text
    Context is crucial for robots to be able to adapt to circumstances and to fulfill their tasks accordingly. There have been many studies on modeling context in robots; however, these studies either do not construct an incremental and hierarchical structure (i.e., they use a fixed number of contexts and context layers) or determine the necessity of adding a new context with rule-based approaches. In this thesis, we propose two different methods to model context. In the first method, we extend Restricted Boltzmann Machines, a generative associative model, by incrementing the number of contexts and context layers when needed. This model constructs hierarchical and incremental contextual representations by considering the confidence of the objects and contexts after each new scene encountered. Moreover, this deep incremental model obtains better or on-par results compared to the incremental and non-incremental models in the literature on different tasks. In the second method, in contrast to our first method and the methods in the literature, determining the necessity of adding a new context is formulated as a learning problem. To do so, a Latent Dirichlet Allocation (LDA) model is used to generate data with a known number of contexts. Intermediate LDA models with and without the correct number of contexts are then fed to a recurrent model, which is trained to predict whether to add a new context or not. Our analyses on artificial and real datasets demonstrate that such a learning-based approach generalizes well and is promising for solving such incremental problems. M.S. - Master of Science
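The thesis's training-data recipe can be sketched as sampling corpora from the LDA generative process with a known number of contexts K, so a recurrent model can later be supervised on "should K be incremented?". The hyperparameters and corpus sizes below are illustrative, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_corpus(n_docs, doc_len, K, vocab=50, alpha=0.5, beta=0.1):
    """Sample documents from the LDA generative process with K topics
    (contexts), giving ground-truth labels for the increment decision."""
    topics = rng.dirichlet([beta] * vocab, size=K)   # K topic-word dists
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet([alpha] * K)           # per-doc topic mixture
        z = rng.choice(K, size=doc_len, p=theta)     # topic for each word
        words = [rng.choice(vocab, p=topics[k]) for k in z]
        docs.append(words)
    return docs

corpus = sample_corpus(n_docs=4, doc_len=20, K=3)
print(len(corpus), len(corpus[0]))  # 4 20
```

Because K is known by construction, intermediate models fit with too few contexts can be labeled "increment" and those with the correct K "stop", which is the supervision the recurrent model needs.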

    Learning to Increment A Contextual Model

    No full text
    In this paper, we summarize our efforts on incremental construction of latent variables in context (topic) models. With our models, an agent can incrementally learn a representation of critical contextual information. We demonstrate that a learning-based formulation outperforms rule-based models, and generalizes well across many settings and to real data.

    CINet: A Learning Based Approach to Incremental Context Modeling in Robots

    No full text
    There have been several attempts at modeling context in robots. However, these attempts either assume a fixed number of contexts or use a rule-based approach to determine when to increment the number of contexts. In this paper, we pose the task of when to increment as a learning problem, which we solve using a Recurrent Neural Network. We show that the network successfully (with 98% testing accuracy) learns to predict when to increment, and demonstrate, in a scene modeling problem (where the correct number of contexts is not known), that the robot increments the number of contexts in an expected manner (i.e., the entropy of the system is reduced). We also present how the incremental model can be used for various scene reasoning tasks.